7 research outputs found
Data Models for Dataset Drift Controls in Machine Learning With Images
Camera images are ubiquitous in machine learning research. They also play a
central role in the delivery of important services spanning medicine and
environmental surveying. However, the application of machine learning models in
these domains has been limited because of robustness concerns. A primary
failure mode is a drop in performance due to differences between the training and
deployment data. While there are methods to prospectively validate the
robustness of machine learning models to such dataset drifts, existing
approaches do not account for explicit models of the primary object of
interest: the data. This makes it difficult to create physically faithful drift
test cases or to provide specifications of data models that should be avoided
when deploying a machine learning model. In this study, we demonstrate how
these shortcomings can be overcome by pairing machine learning robustness
validation with physical optics. We examine the role raw sensor data and
differentiable data models can play in controlling performance risks related to
image dataset drift. The findings are distilled into three applications. First,
drift synthesis enables the controlled generation of physically faithful drift
test cases. The experiments presented here show that the average decrease in
model performance is four to ten times less severe than under post-hoc
augmentation testing. Second, the gradient connection between task and data
models allows for drift forensics that can be used to specify
performance-sensitive data models which should be avoided during deployment of
a machine learning model. Third, drift adjustment opens up the possibility for
processing adjustments in the face of drift. This can speed up and stabilize
classifier training, yielding gains of up to 20% in validation accuracy. A guide
to access the open code and datasets is available at
https://github.com/aiaudit-org/raw2logit
Comment: LO and MA contributed equally.
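The drift-synthesis idea above can be sketched in a few lines: re-process the same raw sensor measurement under perturbed data-model configurations to obtain physically plausible drift variants. This is a minimal illustrative sketch, not the raw2logit pipeline; the processing steps and parameter names (black level, gain, gamma) are assumptions chosen for illustration.

```python
import numpy as np

def process_raw(raw, black_level=64.0, gain=1.0, gamma=2.2):
    """Toy raw-to-image data model (hypothetical, not the paper's pipeline):
    black-level subtraction, gain, normalization, gamma compression."""
    img = np.clip((raw - black_level) * gain, 0.0, None)
    peak = img.max()
    if peak > 0:
        img = img / peak
    return img ** (1.0 / gamma)

def synthesize_drift(raw, gains=(0.8, 1.0, 1.2)):
    """Drift synthesis: the same raw measurement processed under
    perturbed data-model configurations (here, varying gain)."""
    return [process_raw(raw, gain=g) for g in gains]

# One simulated raw sensor frame, re-developed into three drift variants
rng = np.random.default_rng(0)
raw = rng.uniform(64, 1023, size=(8, 8))
variants = synthesize_drift(raw)
```

Because every variant is derived from the same raw measurement through an explicit data model, each test case remains physically faithful rather than an ad-hoc pixel-space augmentation.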
Data models for dataset drift controls in machine learning with optical images
Camera images are ubiquitous in machine learning research. They also play a central role
in the delivery of important public services spanning medicine and environmental surveying.
However, the application of machine learning models in these domains has been limited
because of robustness concerns. A primary failure mode is a drop in performance due to
differences between the training and deployment data. While there are methods to prospectively validate the robustness of machine learning models to such dataset drifts, existing
approaches do not account for explicit models of machine learning’s primary object of interest:
the data. This limits our ability to study and understand the relationship between data
generation and downstream machine learning model performance in a physically accurate
manner. In this study, we demonstrate how to overcome this limitation by pairing traditional
machine learning with physical optics to obtain explicit and differentiable data models. We
demonstrate how such data models can be constructed for image data and used to control
downstream machine learning model performance related to dataset drift. The findings
are distilled into three applications. First, drift synthesis enables the controlled generation
of physically faithful drift test cases to power model selection and targeted generalization.
Second, the gradient connection between machine learning task model and data model allows
advanced, precise tolerancing of task model sensitivity to changes in the data generation.
These drift forensics can be used to precisely specify the acceptable data environments
in which a task model may be run. Third, drift optimization opens up the possibility to
create drifts that can help the task model learn better and faster, effectively optimizing the
data generating process itself to support the downstream machine vision task. This is an
interesting upgrade to existing imaging pipelines, which have traditionally been optimized
for consumption by human users rather than machine learning models. The data models require
access to raw sensor images as commonly processed at scale in industry domains such as
microscopy, biomedicine, autonomous vehicles or remote sensing. Alongside the data model
code we release two datasets to the public that we collected as part of this work. In total,
the two datasets, Raw-Microscopy and Raw-Drone, comprise 1,488 scientifically calibrated
reference raw sensor measurements, 8,928 raw intensity variations as well as 17,856 images
processed through twelve data models with different configurations. A guide to access the
open code and datasets is available at https://github.com/aiaudit-org/raw2logit
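The gradient connection between data model and task model described above can be sketched with a toy example. Everything here is a hypothetical stand-in for the paper's architecture: the data model is a single scalar gain, the task model is a linear scorer, and the chain rule carries the task loss gradient back through both, which is the mechanism drift forensics and drift optimization rely on.

```python
import numpy as np

# Toy differentiable data model: image = gain * raw (scalar gain, hypothetical).
# Toy task model: linear scorer w.x with squared loss against target y.
rng = np.random.default_rng(1)
raw = rng.normal(size=4)   # raw sensor measurement
w = rng.normal(size=4)     # task model weights
y = 0.5                    # target
gain = 1.0                 # data model parameter

x = gain * raw             # data model forward pass
pred = w @ x               # task model forward pass
loss = (pred - y) ** 2

# Chain rule through task model AND data model:
# dL/dgain = dL/dpred * dpred/dx . dx/dgain = 2*(pred - y) * (w . raw)
grad_gain = 2.0 * (pred - y) * (w @ raw)

# Finite-difference check of the analytic gradient
eps = 1e-6
loss_eps = (w @ ((gain + eps) * raw) - y) ** 2
fd = (loss_eps - loss) / eps
```

With this gradient in hand, one can either search for data-model parameter regions where the loss degrades sharply (forensics) or step the data-model parameters to reduce the loss (optimization of the data-generating process).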
Dataset - High resolution Optical Projection Tomography platform for multispectral imaging of the mouse gut
Abstract: Setup design, microbead projections, and FBP (filtered back-projection) reconstruction code relating to the 'High resolution Optical Projection Tomography platform for multispectral imaging of the mouse gut' paper.
High resolution optical projection tomography platform for multispectral imaging of the mouse gut
Optical projection tomography (OPT) is a powerful tool for three-dimensional imaging of mesoscopic biological samples with great use for biomedical phenotyping studies. We present a fluorescent OPT platform that enables direct visualization of biological specimens and processes at a centimeter scale with high spatial resolution, as well as fast data throughput and reconstruction. We demonstrate nearly isotropic sub-28 µm resolution over more than 60 mm³ after reconstruction of a single acquisition. Our setup is optimized for imaging the mouse gut at multiple wavelengths. Thanks to a new sample preparation protocol specifically developed for gut specimens, we can observe the spatial arrangement of the intestinal villi and the vasculature network of a 3-cm-long healthy mouse gut. Besides the blood vessel network surrounding the gastrointestinal tract, we observe traces of vasculature at the villi ends close to the lumen. The combination of rapid acquisition and a large field of view with high spatial resolution in 3D mesoscopic imaging holds invaluable potential for gastrointestinal pathology research.
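The FBP (filtered back-projection) reconstruction mentioned in the dataset entry can be sketched as follows. This is a minimal, illustrative parallel-beam implementation (simple ramp filter, nearest-neighbour back-projection), not the platform's released reconstruction code; it is verified here against the analytic sinogram of a centred disk, whose projection profile is the same at every angle.

```python
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    """Minimal parallel-beam filtered back-projection:
    ramp-filter each projection, then smear it back across the image."""
    n_det, _ = sinogram.shape
    # Ramp filter along the detector axis, applied in the Fourier domain
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0) * ramp[:, None], axis=0))
    xs = np.arange(n_det) - n_det / 2
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for k, theta in enumerate(np.deg2rad(angles_deg)):
        # Detector coordinate of each pixel for this view (nearest neighbour)
        t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += filtered[idx, k]
    return recon * np.pi / (2 * len(angles_deg))

# Analytic sinogram of a centred disk of radius r: p(t) = 2*sqrt(r^2 - t^2),
# identical at every projection angle.
n, r = 64, 10.0
t_axis = np.arange(n) - n / 2
profile = 2.0 * np.sqrt(np.clip(r**2 - t_axis**2, 0.0, None))
angles = np.arange(0, 180, 4)
sinogram = np.repeat(profile[:, None], len(angles), axis=1)
recon = fbp_reconstruct(sinogram, angles)
```

The reconstruction should be bright inside the disk and near zero outside it; production OPT pipelines add apodized filters, interpolated back-projection, and calibration corrections on top of this core loop.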
Opening the black box of traumatic brain injury: a holistic approach combining human 3D neural tissue and an in vitro traumatic brain injury induction device
Traumatic brain injury (TBI) is caused by a wide range of physical events and can induce an even larger spectrum of short- to long-term pathophysiologies. Neuroscientists have relied on animal models to understand the relationship between mechanical damage and functional alterations of neural cells. These in vivo and animal-based in vitro models represent important approaches to mimic traumas on whole brains or organized brain structures but are not fully representative of pathologies occurring after traumas on human brain parenchyma. To overcome these limitations and to establish a more accurate and comprehensive model of human TBI, we engineered an in vitro platform to induce injuries via the controlled projection of a small drop of liquid onto a 3D neural tissue engineered from human iPS cells. With this platform, biological mechanisms involved in neural cellular injury are recorded through electrophysiology measurements, quantification of released biomarkers, and two imaging methods [confocal laser scanning microscopy (CLSM) and optical projection tomography (OPT)]. The results showed drastic changes in tissue electrophysiological activity and significant releases of glial and neuronal biomarkers. Tissue imaging allowed us to reconstruct the injured area spatially in 3D after staining it with specific nuclear dyes and to identify TBI-induced cell death. In future experiments, we seek to monitor the effects of TBI-induced injuries over a prolonged time and at a higher temporal resolution to better understand the subtleties of the biomarker release kinetics and the cell recovery phases.